The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning

Neural Information Processing Systems

Biased regularization and fine tuning are two recent meta-learning approaches. They have been shown to be effective in tackling distributions of tasks in which the tasks' target vectors are all close to a common meta-parameter vector. However, these methods may perform poorly on heterogeneous environments of tasks, where the complexity of the tasks' distribution cannot be captured by a single meta-parameter vector. We address this limitation by conditional meta-learning, inferring a conditioning function that maps a task's side information into a meta-parameter vector appropriate for that task. We characterize properties of the environment under which the conditional approach brings a substantial advantage over standard meta-learning, and we highlight examples of environments, such as those with multiple clusters, satisfying these properties. We then propose a convex meta-algorithm that provides a comparable advantage in practice. Numerical experiments confirm our theoretical findings.
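A minimal sketch of the idea described in the abstract, not the authors' implementation: biased regularization learns each task's weights by penalizing distance to a meta-parameter (bias) vector, and the conditional variant picks that bias from the task's side information instead of using one shared vector. The helper names (biased_ridge, conditional_bias), the nearest-cluster rule for the conditioning function, and the toy two-cluster environment are assumptions made for illustration.

import numpy as np

def biased_ridge(X, y, bias, lam=1.0):
    # Within-task estimator: min_w (1/n)||X w - y||^2 + lam * ||w - bias||^2
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    b = X.T @ y / n + lam * bias
    return np.linalg.solve(A, b)

def conditional_bias(side_info, side_centers, cluster_biases):
    # Conditioning function (illustrative): use the meta-parameter of the
    # cluster whose side-information center is closest to this task's side info.
    dists = np.linalg.norm(side_centers - side_info, axis=1)
    return cluster_biases[np.argmin(dists)]

# Toy heterogeneous environment: two clusters of tasks with distant target vectors.
rng = np.random.default_rng(0)
d, n = 5, 30
cluster_targets = np.array([np.full(d, 4.0), np.full(d, -4.0)])  # per-cluster task centers
side_centers = np.array([[1.0, 0.0], [0.0, 1.0]])                # side info per cluster

def sample_task(c):
    w_true = cluster_targets[c] + 0.1 * rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    s = side_centers[c] + 0.05 * rng.standard_normal(2)
    return X, y, s, w_true

# Unconditional meta-learning: a single bias (here the average of the clusters).
uncond_bias = cluster_targets.mean(axis=0)
# Conditional meta-learning: one bias per cluster, selected via side information.
cond_biases = cluster_targets.copy()

err_uncond, err_cond = [], []
for _ in range(200):
    c = rng.integers(2)
    X, y, s, w_true = sample_task(c)
    w_u = biased_ridge(X, y, uncond_bias, lam=10.0)
    w_c = biased_ridge(X, y, conditional_bias(s, side_centers, cond_biases), lam=10.0)
    err_uncond.append(np.linalg.norm(w_u - w_true))
    err_cond.append(np.linalg.norm(w_c - w_true))

print("unconditional bias error:", np.mean(err_uncond))
print("conditional bias error:  ", np.mean(err_cond))

On this toy environment the single shared bias sits between the two clusters, so heavily regularizing toward it hurts every task, while the conditional bias matches the task's own cluster, which is the advantage the paper quantifies theoretically.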


Review for NeurIPS paper: The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning

Neural Information Processing Systems

However, this sentence does feel a bit problematic. Fine tuning is simply an adaptation method, and biased regularization is a regularizer for meta-learning. I'd say these are, at best, building blocks for meta-learning algorithms.


Review for NeurIPS paper: The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning

Neural Information Processing Systems

Thank you for bringing up the shortcomings of the initial reviewers. I'm disappointed that they did not seem to evaluate the core contributions of the paper, which are theoretical in nature. After the rebuttal, I sought out two emergency reviewers for the paper who are better suited to review it. The two new reviewers (whose reviews should be visible) both scored the paper above the bar. I generally agree with their assessment, as well as with their feedback on the paper.

